feat: traceability metrics #484
Conversation
License Check Results 🚀: The license check job ran with the Bazel command `bazel run --lockfile_mode=error //src:license-check`.
The documentation built from this pull request is available at: docu-html
AlexanderLanin
left a comment
see discussion in #485
Closed #485. @AlexanderLanin, can you please check again?
MaximilianSoerenPollak
left a comment
Some comments for things I noticed.
Will take another look after lunch.
Overall looks like genuine good work that we can build upon.
Also, a general thing: if there is any 'substantial' or partial AI generation, please add the disclaimer in the header, naming the model, e.g. 'GPT', 'Copilot', or 'Claude'.
I will add it. But we should also think about using tools like https://github.com/git-ai-project/git-ai in general.
Might be worth a look to see whether we could use that. I can bring it up in infra.
@FScholPer I would propose that I take the ideas and proposals you have made in this PR and re-implement them in Docs-As-Code, so that they fit better architecturally and make sense with how test cases etc. actually behave and are created inside the DaC frameworks that deal with them. I fear it would be a lot more work to merge this PR and then rebuild or change it quite heavily later when we want to integrate it. Do you think this would make sense and is a good way to move forward with this issue? @AlexanderLanin @a-zw please also advise what you think would be a good approach.
To me it sounds like we need to discuss the goals first, then how to achieve them. That's the reason why this mixes extensions with custom executables, and no one really understands why, or how it should look.
Co-authored-by: Maximilian Sören Pollak <maximilian.pollak@qorix.com> Signed-off-by: Frank Scholter Peres(MBTI) <145544737+FScholPer@users.noreply.github.com>
Co-authored-by: Andreas Zwinkau <95761648+a-zw@users.noreply.github.com> Signed-off-by: Frank Scholter Peres(MBTI) <145544737+FScholPer@users.noreply.github.com>
Force-pushed from 4585a27 to a364257.
Goals from my side: 1. check linking of requirements to code/tests in CI; 2. add dashboards in all modules. Or what goals do you have in mind?
@FScholPer we've discussed this PR, unfortunately we missed you. So here is our interpretation. Let's have a call next week. The goals here are:
Here is an idea that simplifies the design: what do you think? (It may not be self-explanatory.)
Co-authored-by: Copilot <copilot@github.com>
Co-authored-by: Copilot <copilot@github.com>
Fixed the new comments; mostly documentation issues.
Co-authored-by: Alexander Lanin <Alexander.Lanin@etas.com> Signed-off-by: Frank Scholter Peres(MBTI) <145544737+FScholPer@users.noreply.github.com>
…e, source_linker config)
Note: This review was AI-assisted (Claude).
Summary
The architectural direction of this PR is solid and the iteration since the early feedback shows real progress: metrics are now computed inside Sphinx (single source of truth), exported as a well-defined metrics.json (schema-v1), and consumed by a clean gate script. The shared traceability_metrics.py module avoids duplication between dashboards and CI. The refactoring of standards.py to use generic_pie_* filters is a nice generalization that will pay off as more standards are onboarded. Test coverage for the new modules is good.
I'm organizing findings into three tiers so we can focus the remaining effort.
Must fix before merge
- `//:traceability_gate` target does not exist for consumer repos. The `docs()` macro in `docs.bzl` does not create a `traceability_gate` target. The how-to guide (line 108) and `commands.md` both reference `bazel run //:traceability_gate`, but only `//scripts_bazel:traceability_gate` exists. Consumer repos that call `docs()` will get `docs`, `docs_check`, `needs_json`, etc., but not `traceability_gate`. The macro should create an alias or wrapper so the documented command works out of the box. (See inline comments on `commands.md` and the how-to.)
- `metrics.json` is an undeclared Bazel output. `_write_metrics_json` writes to `Path(app.outdir) / "metrics.json"` as a side effect of the Sphinx build, but `docs.bzl` doesn't declare it as an output of the `:needs_json` target. For `bazel run` this may work because the sandbox is relaxed, but for `bazel build` (which CI typically uses) the file may not be preserved or accessible at the documented path. This was flagged by @a-zw as well and needs explicit Bazel integration.
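To make the undeclared-output finding concrete, here is a minimal sketch of the kind of side-effect write being described. The function name, the `schema_version` field, and the stand-in for the Sphinx `outdir` are assumptions for illustration; the real `score_metamodel` code differs.

```python
import json
import tempfile
from pathlib import Path


def write_metrics_json(outdir: str, summary: dict) -> Path:
    # Written straight into the Sphinx output directory. Because docs.bzl
    # never declares this file as an output of :needs_json, `bazel build`
    # may not preserve it at the documented path (the finding above).
    out = Path(outdir) / "metrics.json"
    out.write_text(json.dumps({"schema_version": 1, **summary}, indent=2))
    return out


# demo: the file exists only as a side effect of calling the writer
with tempfile.TemporaryDirectory() as tmp:
    written = write_metrics_json(tmp, {"requirements_total": 3})
    assert json.loads(written.read_text())["schema_version"] == 1
```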
Should fix
- Placeholder GitHub URL in plain_links mode. `_render_code_link` generates `https://github.com/placeholder/placeholder/blob/unknown/…` for Bazel sandbox builds. These fake URLs could propagate into `needs.json` and eventually into external-needs HTML. Consider using just the local path without a URL (the `<>` separator format already supports name-only rendering), or at minimum a clearly-sentinel scheme like `local://`.
- Unrelated changes inflate the diff. The `.pre-commit-config.yaml` switch to `tools/run_tool.sh` wrappers, the `tools/run_tool.sh` script itself, and the shellcheck/actionlint aliases in `BUILD` are orthogonal to traceability metrics. Splitting them into a separate PR would make both easier to review and revert independently.
- The `traceability_gate` Bazel target pulls `all_requirements` but only uses the stdlib. The gate script imports only `argparse`, `json`, `os`, `sys`, `pathlib`. The `deps = all_requirements` in `scripts_bazel/BUILD` adds ~30 pip packages for no reason, slowing down resolution and increasing the dependency surface.
Can be fixed later
- `process_requirements` computed but not exported to `metrics.json` / gate. `compute_traceability_summary` returns process-requirement metrics and the dashboard shows them, but `_write_metrics_json` doesn't include them in the JSON output, so the gate can't enforce thresholds on them. Fine for v1, but worth noting as a gap for future iterations.
- Global mutable `_DEFAULT_INCLUDE_EXTERNAL` in `traceability_dashboard.py`. Using module-level state with `set_default_include_external()` works but is fragile for testing and for any future multi-build scenario. A config-object approach would be cleaner.
- CSS `!important` in `score_design.css`. `box-shadow: none !important` on `.score-grid .sd-card` overrides any consuming repo's card styling. This is fine for now but may need scoping to dashboard-specific containers later.
What's good
- The `metrics.json` schema is well-designed: versioned, documented, and strict (`additionalProperties: false`). This will make evolution manageable.
- Reusing `compute_traceability_summary` across dashboard filters and JSON export guarantees consistency: the numbers you see in the pie charts are the same ones CI gates on.
- The gate script is clean and focused: it reads pre-computed data, prints a human-readable summary, and exits with distinct codes (0/1/2). Easy to integrate into any CI system.
- Test coverage for the new code is good, especially the `test_traceability_metrics.py` and `traceability_gate_test.py` suites, which cover edge cases like external needs, multiple types, and broken references.
- The how-to guide (`dashboards_and_quality_gates.rst`) reads well and covers the right workflow: local → aggregated → CI rollout.
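The gate-script pattern praised above (read pre-computed data, print a summary, exit 0/1/2) can be sketched as follows. This is a hedged illustration, not the real `traceability_gate.py`: the metrics field name `requirements_linked_to_code_percent` and the single threshold flag are assumptions.

```python
import argparse
import json
import sys
from pathlib import Path


def main(argv: list[str]) -> int:
    parser = argparse.ArgumentParser(description="Fail when coverage falls below thresholds")
    parser.add_argument("--metrics-json", required=True)
    parser.add_argument("--min-req-code", type=float, default=0.0)
    args = parser.parse_args(argv)

    path = Path(args.metrics_json)
    if not path.is_file():
        # exit code 2: environment/usage problem, distinct from a failed gate
        print(f"error: {path} not found (did the Sphinx build run?)", file=sys.stderr)
        return 2

    metrics = json.loads(path.read_text())
    actual = metrics["requirements_linked_to_code_percent"]  # assumed field name
    print(f"requirements linked to code: {actual:.1f}% (minimum {args.min_req_code:.1f}%)")
    return 0 if actual >= args.min_req_code else 1  # exit code 1: threshold not met
```

Because the script only reads a JSON file and returns a process exit code, it plugs into any CI system without knowing anything about Sphinx or Bazel.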
| `bazel run //:docs` | Builds documentation (also writes `metrics.json` via the score_metamodel extension) |
| `bazel run //:docs_check` | Verifies documentation correctness |
| `bazel run //:docs_combo` | Builds combined documentation with all external dependencies included |
| `bazel run //:traceability_gate -- --metrics-json bazel-bin/needs_json/_build/needs/metrics.json --min-req-code 70 --min-req-test 70 --min-req-fully-linked 60 --min-tests-linked 70` | Reads the pre-computed metrics.json and fails if coverage thresholds are not met |
AI-assisted review (Claude):
`//:traceability_gate` is documented here but no such target is defined in `BUILD` or created by the `docs()` macro in `docs.bzl`. Only `//scripts_bazel:traceability_gate` exists.
For the docs-as-code repo itself, add an alias in BUILD. For consumer repos, the docs() macro should create this target automatically — similar to how it creates //:docs_check and //:needs_json.
Must fix: The documented command won't work for anyone.
Fix-commit created by AI (Claude)
Fixed in 0d9195cb: docs() macro now creates a traceability_gate alias target for consumer repos.
| .. code-block:: bash |
| |
|    bazel run //:docs && \ |
|    bazel run //:traceability_gate -- \ |
AI-assisted review (Claude):
Same issue: bazel run //:traceability_gate won't work for consumer repos because the docs() macro doesn't create this target. The how-to should also clarify the dependency: the gate needs metrics.json which is a side-effect of the Sphinx build, but Bazel doesn't declare it as an output — so bazel build //:needs_json && bazel run //:traceability_gate may not find the file.
Consider wiring the gate as a Bazel test target that depends on :needs_json, so bazel test //:traceability_gate does the right thing automatically.
Fix-commit created by AI (Claude)
Fixed in 0d9195cb: docs() macro now creates a traceability_gate alias. The how-to has also been updated to clarify the dependency on metrics.json from the Sphinx build (69254cfe).
| main = "traceability_gate.py", |
| visibility = ["//visibility:public"], |
| deps = all_requirements, |
| ) |
AI-assisted review (Claude):
traceability_gate.py only uses stdlib modules (argparse, json, os, sys, pathlib). deps = all_requirements adds ~30 pip packages unnecessarily.
Suggested change:
| deps = [], |
| ) |
Fix-commit created by AI (Claude)
Fixed in 44dcc749: deps changed to [] — the gate script only uses stdlib modules.
| def _render_code_link(plain_links: bool, metadata: RepoInfo, link: NeedLink) -> str: |
|     if plain_links: |
|         return ( |
|             "https://github.com/placeholder/placeholder/blob/unknown/" |
AI-assisted review (Claude):
This hardcoded placeholder URL will end up in needs.json and could propagate into HTML when needs are imported as external needs by consuming repos. Users clicking these links would land on a 404.
Since plain_links mode is for sandbox builds where git info is unavailable, consider dropping the URL entirely and using just the display name: `return f"{link.file}:{link.line}"`. The source_code_linker string-link regex already handles the name-only format (no `<>` separator).
Fix-commit created by AI (Claude)
Fixed in cba4fa8f: placeholder URL removed. Plain-links mode now renders as file:line without a fake GitHub URL.
The how-to and commands.md reference `bazel run //:traceability_gate` but the docs() macro never created this target. Consumer repos only had access to `//scripts_bazel:traceability_gate` which is an internal path. Add an alias so the documented command works out of the box.
traceability_gate.py only uses stdlib modules (argparse, json, os, sys, pathlib). Having deps = all_requirements added ~30 pip packages for no reason.
In Bazel sandbox builds (plain_links mode), _render_code_link generated a hardcoded https://github.com/placeholder/placeholder/blob/unknown/ URL that would 404 if clicked. Use just the file:line display format instead.
Adding a one-line .clwb entry does not constitute substantial AI generation warranting the Eclipse GenAI header.
Adding a py_binary target declaration is boilerplate, not substantial AI generation.
The condition `getEffectiveLevel() < 10` almost never triggers (only for NOTSET=0), so the function iterated all needs even at WARNING level where debug() output is discarded. Use isEnabledFor(DEBUG) instead.
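The logging fix described in that commit can be sketched as below. The logger name and the `needs` dict are illustrative assumptions; the point is the guard itself.

```python
import logging

logger = logging.getLogger("score_metamodel")  # assumed logger name


def dump_needs(needs: dict) -> None:
    # Wrong guard: `getEffectiveLevel() < 10` is only true for NOTSET (0),
    # so at WARNING the loop would still run while every debug() call is
    # discarded. isEnabledFor() checks against the effective level properly.
    if logger.isEnabledFor(logging.DEBUG):
        for need_id, need in needs.items():
            logger.debug("need %s: %s", need_id, need)


logging.basicConfig(level=logging.WARNING)
dump_needs({"req__example": {"status": "valid"}})  # loop is skipped at WARNING
```

`isEnabledFor()` also respects `logging.disable()` and level inheritance, which the raw level comparison does not.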
- docs_verify doesn't exist; the target is docs_check
- remove leftover parenthetical from review comment
_write_metrics_json outputs include_external per type but the schema had additionalProperties: false without listing the field, so any schema validation would reject the generated JSON.
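The strict-schema mismatch in that commit boils down to this: under `additionalProperties: false`, any key the schema doesn't list is rejected. A stdlib-only sketch of that check (the field names here are assumptions, not the real schema-v1):

```python
schema = {
    "type": "object",
    "additionalProperties": False,
    "properties": {
        "schema_version": {"type": "integer"},
        "requirements_total": {"type": "integer"},
        # 'include_external' was emitted by _write_metrics_json but missing
        # from the schema, so strict validation rejected the generated JSON:
        "include_external": {"type": "boolean"},
    },
}


def extra_keys(instance: dict, schema: dict) -> set[str]:
    """Return keys a validator would reject under additionalProperties: false."""
    return set(instance) - set(schema["properties"])


output = {"schema_version": 1, "requirements_total": 42, "include_external": True}
assert extra_keys(output, schema) == set()  # accepted once the field is listed
```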
Review fixes — batch update (AI-assisted, Claude)

Pushed 14 commits addressing findings from the May 11–12 review round. Individual reply comments have been added to each resolved thread.

Fixes

Bazel integration
Documentation cleanup
Implementation state page
Source code linker
Housekeeping
Still open / not addressed here
…tion

Bazel sandbox builds have no git metadata, so plain_links mode must still produce a URL that passes the metamodel `^https://github.com/.*` pattern. The `<>` separator ensures sphinx-needs displays only the local path.
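The constraint described in that commit — keep a pattern-conformant URL but display only the local path — can be sketched like this. The `RepoInfo`-less signature, the `NeedLink` fields, and the non-placeholder URL shape are assumptions; only the `<>` display/target split is taken from the discussion above.

```python
import re
from dataclasses import dataclass

METAMODEL_PATTERN = re.compile(r"^https://github\.com/.*")  # pattern from the metamodel


@dataclass
class NeedLink:
    file: str
    line: int


def render_code_link(plain_links: bool, link: NeedLink) -> str:
    if plain_links:
        # Sandbox build, no git metadata: fall back to a placeholder URL that
        # still satisfies the pattern; the text before '<' is what gets shown.
        url = f"https://github.com/placeholder/placeholder/blob/unknown/{link.file}#L{link.line}"
    else:
        url = f"https://github.com/eclipse-score/docs-as-code/blob/main/{link.file}#L{link.line}"  # assumed repo
    return f"{link.file}:{link.line} <{url}>"


rendered = render_code_link(True, NeedLink("src/extensions/foo.py", 12))
display, _, url_part = rendered.partition(" <")
assert METAMODEL_PATTERN.match(url_part.rstrip(">"))  # gate for the metamodel check
assert display == "src/extensions/foo.py:12"          # what sphinx-needs displays
```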
Here are all my changes on one screen, please have a look: https://github.com/eclipse-score/docs-as-code/pull/484/changes/fcde17875d5ef3b72366b9ac61fa2a03ded36123..389675d6e6845eecf9593649b05dc4a303bc5f5b
Looks good to me.

-> see eclipse-score/score#2774
Frank Scholter Peres frank.scholter_peres@mercedes-benz.com, Mercedes-Benz Tech Innovation GmbH